136 research outputs found

    Optimal Checkpointing for Secure Intermittently-Powered IoT Devices

    Energy harvesting is a promising solution to power Internet of Things (IoT) devices. Due to the intermittent nature of these energy sources, one cannot guarantee forward progress of program execution. Prior work has advocated for checkpointing the intermediate state to off-chip non-volatile memory (NVM). Encrypting checkpoints addresses the security concern but significantly increases the checkpointing overhead. In this paper, we propose a new online checkpointing policy that judiciously determines when to checkpoint so as to minimize application time to completion while guaranteeing security. Compared to state-of-the-art checkpointing schemes that do not account for the overhead of encrypted checkpoints, we improve execution time by up to 1.4x. Comment: ICCAD 201
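    A minimal sketch of the trade-off such a policy must balance, written in Python with a toy cost model; the names and numbers (e.g. ENCRYPTED_CKPT_COST) are illustrative assumptions, not the paper's actual policy:

    # Illustrative sketch (not the paper's policy): checkpoint once the work
    # accumulated since the last checkpoint would cost more to redo after a
    # power failure than an encrypted checkpoint costs to take.

    ENCRYPTED_CKPT_COST = 5.0   # assumed time units to encrypt + write a checkpoint to NVM
    PLAIN_CKPT_COST = 1.0       # assumed cost of an unencrypted checkpoint, for comparison

    def should_checkpoint(work_since_last_ckpt: float, ckpt_cost: float) -> bool:
        """Checkpoint when the potential re-execution loss exceeds the checkpoint overhead."""
        return work_since_last_ckpt >= ckpt_cost

    def run(tasks, ckpt_cost):
        """Simulate forward progress; returns (total_time, number_of_checkpoints)."""
        total_time, work_since_ckpt, n_ckpts = 0.0, 0.0, 0
        for task_time in tasks:
            total_time += task_time
            work_since_ckpt += task_time
            if should_checkpoint(work_since_ckpt, ckpt_cost):
                total_time += ckpt_cost      # pay the (encrypted) checkpointing overhead
                work_since_ckpt = 0.0
                n_ckpts += 1
        return total_time, n_ckpts

    # A policy tuned for cheap plaintext checkpoints saves state far too often once
    # encryption is added; comparing the two costs shows why the overhead matters.
    print(run([2.0] * 20, PLAIN_CKPT_COST))
    print(run([2.0] * 20, ENCRYPTED_CKPT_COST))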

    Causative Cyberattacks on Online Learning-based Automated Demand Response Systems

    Power utilities are adopting Automated Demand Response (ADR) to replace costly fuel-fired generators and to preempt congestion during peak electricity demand. Similarly, third-party Demand Response (DR) aggregators are leveraging controllable small-scale electrical loads to provide on-demand grid support services to the utilities. Some aggregators and utilities have started employing Artificial Intelligence (AI) to learn the energy usage patterns of electricity consumers and use this knowledge to design optimal DR incentives. Such AI frameworks use open communication channels between the utility/aggregator and the DR customers, which are vulnerable to causative data integrity cyberattacks. This paper explores vulnerabilities of AI-based DR learning and designs a data-driven attack strategy informed by DR data collected from the New York University (NYU) campus buildings. The case study demonstrates the feasibility and effects of maliciously tampering with (i) real-time DR incentives, (ii) DR event data sent to DR customers, and (iii) responses of DR customers to the DR incentives.
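    A toy Python illustration of the kind of causative data-integrity attack described above: a learner fits customer price sensitivity from (incentive, response) pairs while an attacker inflates the reported responses. The model, numbers, and names are assumptions for illustration, not the paper's attack design:

    import numpy as np

    rng = np.random.default_rng(0)
    true_sensitivity = 2.0                      # kW of load shed per $/kWh of incentive (assumed)

    incentives = rng.uniform(0.1, 1.0, size=200)                        # $/kWh offered per DR event
    responses = true_sensitivity * incentives + rng.normal(0, 0.1, 200) # observed load reductions

    def fit_sensitivity(x, y):
        """Least-squares slope through the origin: the learner's DR response model."""
        return float(x @ y / (x @ x))

    # Causative (data-integrity) attack: inflate the reported responses so the
    # learner overestimates how much load reduction an incentive buys.
    tampered = responses * 1.5

    print("clean estimate   :", round(fit_sensitivity(incentives, responses), 3))
    print("poisoned estimate:", round(fit_sensitivity(incentives, tampered), 3))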

    Maintenance Registers with Boundary Scan Interface

    Concurrent Fault Detector Circuits (CFDCs) are test components of a main system, e.g. an Application Specific Integrated Circuit, and provide the results of the tests in parallel to at least one Error Source Register (ESR). Instead of reading out the ESR in parallel, its contents are copied to a serial shadow register so that they can be read out in series to an error-correcting application, thus reducing the number of output pins and the burden on resources of the main system. The ESR's receipt and transfer of information is under the control of a Boundary Scan Interface. In one embodiment, the test results are prioritized and compared to data in a mask register so that only important errors create a system interrupt, which causes the read-out of data from the shadow register.
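    A behavioral Python sketch of the register flow described above (not the patented circuit itself); the class, bit widths, and mask value are illustrative assumptions:

    # Fault-detector results land in a parallel Error Source Register (ESR), are
    # masked so only important errors raise an interrupt, then are copied to a
    # shadow register and shifted out serially through a single output.

    class MaintenanceRegisters:
        def __init__(self, width: int, mask: int):
            self.width = width
            self.mask = mask          # 1-bits mark errors important enough to interrupt
            self.esr = 0              # parallel Error Source Register
            self.shadow = 0           # serial shadow register

        def report_error(self, bit: int) -> bool:
            """A fault detector sets its ESR bit; return True if it should interrupt."""
            self.esr |= (1 << bit)
            return bool(self.esr & self.mask)

        def capture(self):
            """Copy the ESR into the shadow register (under boundary-scan control)."""
            self.shadow = self.esr
            self.esr = 0

        def shift_out(self):
            """Read the shadow register out one bit at a time on a single pin."""
            for _ in range(self.width):
                yield self.shadow & 1
                self.shadow >>= 1

    regs = MaintenanceRegisters(width=8, mask=0b0000_1100)
    print(regs.report_error(1))          # masked-off error: no interrupt
    print(regs.report_error(3))          # important error: interrupt requested
    regs.capture()
    print(list(regs.shift_out()))        # serial read-out of the captured bits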

    Chip-Chat: Challenges and Opportunities in Conversational Hardware Design

    Modern hardware design starts with specifications provided in natural language. These are then translated by hardware engineers into appropriate Hardware Description Languages (HDLs) such as Verilog before synthesizing circuit elements. Automating this translation could reduce sources of human error in the engineering process. However, it is only recently that artificial intelligence (AI) has demonstrated capabilities for machine-based end-to-end design translations. Commercially available instruction-tuned Large Language Models (LLMs) such as OpenAI's ChatGPT and Google's Bard claim to be able to produce code in a variety of programming languages, but studies examining them for hardware are still lacking. In this work, we thus explore the challenges faced and opportunities presented when leveraging these recent advances in LLMs for hardware design. Given that these 'conversational' LLMs perform best when used interactively, we perform a case study in which a hardware engineer co-architects a novel 8-bit accumulator-based microprocessor architecture with the LLM according to real-world hardware constraints. We then sent the processor to tapeout in a Skywater 130nm shuttle, meaning that this 'Chip-Chat' resulted in what we believe to be the world's first wholly-AI-written HDL for tapeout. Comment: 6 pages, 8 figures. Accepted at the 2023 ACM/IEEE 5th Workshop on Machine Learning for CAD (MLCAD).
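    A hedged Python sketch of the interactive co-design loop this workflow implies; query_llm is a hypothetical placeholder for whatever chat-completion API is used, and the example spec is illustrative rather than the paper's actual processor specification:

    def query_llm(messages: list[dict]) -> str:
        """Placeholder: send the conversation so far to a chat LLM, return its reply."""
        raise NotImplementedError("wire this up to an actual chat-completion API")

    def co_design(spec: str, rounds: int = 3) -> str:
        """Engineer and LLM iterate on RTL: propose, review, correct, repeat."""
        messages = [
            {"role": "system", "content": "You are a hardware engineer who writes synthesizable Verilog."},
            {"role": "user", "content": f"Write a Verilog module for this spec:\n{spec}"},
        ]
        rtl = ""
        for _ in range(rounds):
            rtl = query_llm(messages)                     # LLM proposes (or revises) the RTL
            messages.append({"role": "assistant", "content": rtl})
            feedback = input("Engineer feedback (blank to accept): ")
            if not feedback:
                break                                     # human accepts the design
            messages.append({"role": "user", "content": feedback})
        return rtl

    # Illustrative usage, loosely in the spirit of the accumulator-based processor:
    # co_design("An 8-bit accumulator register with synchronous load and clear.")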

    FLAG: Finding Line Anomalies (in code) with Generative AI

    Code contains security and functional bugs. The process of identifying and localizing them is difficult and relies on human labor. In this work, we present a novel approach (FLAG) to assist human debuggers. FLAG is based on the lexical capabilities of generative AI, specifically Large Language Models (LLMs). Here, we input a code file, then extract and regenerate each line within that file for self-comparison. By comparing the original code with an LLM-generated alternative, we can flag notable differences as anomalies for further inspection, with features such as distance from comments and LLM confidence also aiding this classification. This reduces the inspection search space for the designer. Unlike other automated approaches in this area, FLAG is language-agnostic, can work on incomplete (and even non-compiling) code, and requires no creation of security properties, functional tests, or definition of rules. In this work, we explore the features that help LLMs in this classification and evaluate the performance of FLAG on known bugs. We use 121 benchmarks across C, Python, and Verilog, each containing a known security or functional weakness. We conduct the experiments using two state-of-the-art LLMs, OpenAI's code-davinci-002 and gpt-3.5-turbo, but our approach may be used with other models. FLAG can identify 101 of the defects and helps reduce the search space to 12-17% of the source code.
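    A minimal Python sketch of the regenerate-and-compare idea described above (not the authors' released tool); regenerate_line is a hypothetical stand-in for the LLM call, and the similarity threshold is an assumed parameter:

    import difflib

    def regenerate_line(prefix: list[str], suffix: list[str]) -> str:
        """Placeholder: ask a code LLM to predict the missing line between prefix and suffix."""
        raise NotImplementedError("call a code-completion LLM here")

    def flag_anomalies(lines: list[str], threshold: float = 0.5) -> list[int]:
        """Return indices of lines whose LLM-regenerated version diverges strongly."""
        flagged = []
        for i, original in enumerate(lines):
            predicted = regenerate_line(lines[:i], lines[i + 1:])
            similarity = difflib.SequenceMatcher(None, original, predicted).ratio()
            if similarity < threshold:          # large divergence -> candidate anomaly
                flagged.append(i)
        return flagged

    # Illustrative usage:
    # flag_anomalies(open("target.c").read().splitlines())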